34 - Recap Clip 6.8: Dynamic Bayesian Networks [ID:30437]

Now, of course, another way to handle these kinds of problems is to take the tools we already have, i.e. Bayesian networks, and basically just make them dynamic. How do we do that? Well, we just assume we have one Bayesian network for each of our time slices, all of them isomorphic in the sense that they represent the same thing at different time steps, with all of the relationships staying the same. And then we get a dynamic Bayesian network.

We'll come to that later, but of course everything I can model by a dynamic Bayesian network I can also model by a hidden Markov model, and the other way around. Coming back to the umbrella case, this is the dynamic Bayesian network we would put up for that.
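(As a side note, here is a minimal sketch in Python of what such a repeating two-slice template looks like; the numbers are the standard ones from the Russell and Norvig umbrella example, not anything shown on the slide.)

    # Umbrella DBN as a repeating slice: Rain_{t-1} -> Rain_t -> Umbrella_t,
    # with the same local models copied for every time step t.
    prior      = {"Rain_0": 0.5}          # P(Rain_0 = true)
    transition = {True: 0.7, False: 0.3}  # P(Rain_t = true | Rain_{t-1})
    sensor     = {True: 0.9, False: 0.2}  # P(Umbrella_t = true | Rain_t)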

You already know it looks suspiciously like the Markov model we handled earlier. I'm not sure where this example comes from. Does anybody know what all of this actually means, or is intended to mean? I've forgotten the story behind it. Well, it doesn't matter too much. We have this example and that's the one I'm going to go with.

The basic idea is we just take a Bayesian network, index it by the time structure, and then we can do the usual Bayesian network stuff with it. The nice thing is that every hidden Markov model gives us a dynamic Bayesian network, and conversely every discrete dynamic Bayesian network gives us a hidden Markov model.
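(A quick sketch of the second direction, assuming discrete state variables; the second variable "Fog" is made up purely for illustration. Collapsing all state variables of one slice into a single joint mega-variable turns the DBN into an HMM over those joint states.)

    from itertools import product

    # Collapse all state variables of a slice into one joint mega-variable;
    # the DBN then becomes an HMM whose states are these tuples.
    state_vars = {"Rain": [True, False], "Fog": [True, False]}
    hmm_states = list(product(*state_vars.values()))
    print(len(hmm_states))  # 4 joint states, i.e. one HMM variable with 4 values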

But the nice thing about dynamic Bayesian networks, or Bayesian networks in general, is that they exploit all of our conditional independencies.

So in this case that means that if we have sparse dependencies between all of the variables occurring in our networks, then dynamic Bayesian networks will have exponentially fewer parameters. Here's a nice example. If we assume we have 20 state variables and each of those has three parents in the previous time slice, then my dynamic Bayesian network would only have 160 parameters, whereas doing the same thing as a hidden Markov model would give me 10 to the 12, i.e. a trillion, parameters.
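(A quick sanity check of these numbers, assuming all variables are Boolean:)

    # DBN: each of the 20 variables needs one probability per setting of
    # its 3 Boolean parents; HMM: a full transition matrix over 2^20 states.
    n_vars, n_parents = 20, 3
    dbn_params = n_vars * 2 ** n_parents  # 20 * 8 = 160
    hmm_params = (2 ** n_vars) ** 2       # 2^40, about 1.1e12
    print(dbn_params, hmm_params)         # 160 1099511627776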

So this very quickly becomes intractable. But of course dynamic Bayesian networks have drawbacks of their own, the obvious one being that exact inference is rather difficult to do.

How would we go about doing inference in a dynamic Bayesian network? The naive idea would basically be to take the dynamic Bayesian network, forget about the time slices, and just build one giant Bayesian network by plugging the slices together. Then we get one giant network, and then we can do exact inference. Hooray.
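(Here is a minimal sketch of this unrolling; the edge-list representation of the DBN is a made-up convention for illustration, not the lecture's notation.)

    # Unroll a DBN into one big static Bayesian network over n_slices slices.
    intra = [("Rain", "Umbrella")]  # edges within a single slice
    inter = [("Rain", "Rain")]      # edges from slice t-1 into slice t

    def unroll(intra, inter, n_slices):
        edges = []
        for t in range(n_slices):
            edges += [((u, t), (v, t)) for u, v in intra]  # copy each slice
            if t > 0:
                edges += [((u, t - 1), (v, t)) for u, v in inter]
        return edges

    print(unroll(intra, inter, 3))  # the network grows with every time step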

And of course this grows heavily with each time step. So pretty quickly, if I try to do this kind of stuff, doing exact inference becomes really intractable.

There are some ways to get around that. One keyword for how to do this is rollup filtering, where the idea is basically that we use variable elimination to sum out the oldest time slice as we go, so we only ever keep a bounded number of slices around.
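(In the umbrella world this boils down to the familiar forward recursion; here is a minimal sketch, again with the standard Russell and Norvig numbers, meant as an illustration rather than the lecture's code.)

    # Rollup filtering: keep only the current belief over Rain; each older
    # slice is summed out (eliminated) as soon as the next one is added.
    T = [[0.7, 0.3],  # row = Rain_{t-1} in (true, false),
         [0.3, 0.7]]  # column = Rain_t in (true, false)
    O = [0.9, 0.2]    # P(Umbrella_t = true | Rain_t = true / false)

    def filter_step(belief, umbrella_seen):
        # Predict: push the belief through the transition model ...
        predicted = [sum(belief[j] * T[j][i] for j in range(2)) for i in range(2)]
        # ... then update: weight by the evidence and renormalize.
        like = [O[i] if umbrella_seen else 1 - O[i] for i in range(2)]
        unnorm = [like[i] * predicted[i] for i in range(2)]
        z = sum(unnorm)
        return [u / z for u in unnorm]

    belief = [0.5, 0.5]            # prior over (rain, not rain)
    for e in [True, True, False]:  # umbrella observations on days 1 to 3
        belief = filter_step(belief, e)
    print(belief)                  # belief about rain on day 3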

I think it's been alluded to previously in the exercises, but we didn't go into depth, and we'll also not go into depth here. If you're interested in that, you can probably just Google it. I'm not even sure if it's in Russell and Norvig; if it is, you could also look it up there.

Part of chapter: Recaps
Access: Open access
Duration: 00:07:19 min
Recording date: 2021-03-30
Uploaded: 2021-03-31 11:07:59
Language: en-US

Recap: Dynamic Bayesian Networks

The main video on this topic is chapter 6, clip 8.
